Part 1:¶

Camera Calibration¶

Objective:¶

Determine the intrinsic parameters of 10 cameras using a dataset of checkerboard images.

Assignment Details:¶

1. Dataset Description:¶

The provided dataset contains images of a checkerboard pattern captured from 10 different cameras. These images will be used to calibrate each camera and find their intrinsic parameters. - Dataset https://drive.google.com/drive/folders/1g7M-IgGUQzDXCZ7XUEjWlApbBjPwXBq9?usp=drive_link

2. Task:¶

  • Write a well-commented Python script using OpenCV to perform camera calibration for each of the 10 cameras. Your script should:
    • Detect chessboard corners in each image.
    • Visualize and show corners
    • Use the detected corners to calibrate the cameras and find their intrinsic parameters (camera matrix and distortion coefficients).
    • Save and output the calibration matrices for each camera.

3. Submission Requirements:¶

  • Provide the complete Python script.
  • Include a report summarizing the intrinsic parameters for each camera and an explanation of the workflow.
  • Print how many images you were able to use to improve results (what preprocessing techniques helped you the most and why you chose them).

Camera calibration¶

The script below performs camera calibration for multiple cameras and stores each camera's calibration parameters in a dictionary.

Potential challenges in the current dataset¶

  1. Chessboard patterns of different sizes, e.g. 8x8, 8x2, 7x4, etc.

Solution¶

  1. Read all images from the folder.
  2. Create an empty dictionary to store per-camera information.
  3. Read each image name and extract the camera_id from it, e.g. extracting cam id 9 from the image name 11Cam9.png.
  4. Extract chessboard corners from the given image:
    • The images contain chessboards of different shapes (8x8, 8x2, etc.), and to detect a chessboard we need prior knowledge of its shape.
    • Here I search over pattern shapes dynamically; it is essentially a brute-force method. The script tries candidate corner-grid sizes in combinations from (8, 8) down to (4, 4).
    • Note that the pattern size counts interior corners, not tiles: an 8x8-tile chessboard has 7x7 interior corners. The lower bound exists because cv2.findChessboardCornersSB needs a minimum pattern size to detect a board at all.
    • The search runs in descending order, i.e. largest pattern first, to reduce the possibility of wrongly detecting (false positive) a smaller chessboard inside the actual bigger chessboard.
In [6]:
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
from tqdm import tqdm
import re

# For counting frames with correctly identified chessboard pattern/frames used for calibration
count = 1

# Extracts the camera id from the image names
def get_cam_id(img_name):
    return re.search(r"(\d+)$", img_name).group()


# Path to the folder containing the images
image_folder = 'camera data\\'

pattern_wise_object_points = {}

# Chessboard square size (in mm) can be used if the chessboard square size is known
# square_size = 25

# Image format
image_format = '.png'

# Initializing combinations of pattern sizes to search for the optimal chessboard size.
# Patterns are tried in descending order so that bigger patterns are searched first,
# avoiding false positives from smaller grids inside a bigger board (e.g. a 4x4 grid within an 8x8 chessboard)
for i in range(8, 3, -1):
    for j in range(8, 3, -1):
        pattern_size = (i, j)
        
        # Prepare object points for this pattern size: (0,0,0), (1,0,0), ..., (w-1, h-1, 0)
        objp = np.zeros((pattern_size[0]*pattern_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
        # objp *= square_size
        
        # storing pattern size wise object points
        pattern_wise_object_points[pattern_size] = objp


# Get list of images in the folder
images = glob.glob(image_folder + '*' + image_format)

# Initializing the dictionary for storing the calibration parameters per camera
camera_dict = {i: {"object_points": [], "image_points": []} for i in set(get_cam_id(img.split('.')[0]) for img in images)}

# Loop through each image in the folder, tqdm used for creating progress bar
for fname in tqdm(images):
    # Extracting the camera id from image name
    cam_id = get_cam_id(fname.split('.')[0])
    
    # Read the image
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    # searching chessboard pattern in the current frame
    for k, v in pattern_wise_object_points.items():
        
        # Find the chessboard corners
        ret, corners = cv2.findChessboardCornersSB(gray, k, None)
        
        # If corners are found, add object points and image points against camera_id in the dictionary
        if ret:
            print('corners found!', k)
            
            # Appending the object points and image points for this camera
            camera_dict[cam_id]["object_points"].append(pattern_wise_object_points[k])
            camera_dict[cam_id]["image_points"].append(corners)

            # Visualize and show corners
            img_corners = cv2.drawChessboardCorners(img, k, corners, ret)
            plt.imshow(cv2.cvtColor(img_corners, cv2.COLOR_BGR2RGB))
            plt.title(f'Chessboard Corners image no. {count}')
            count += 1
            plt.show()
            # breaking since the chessboard is detected
            break
  0%|                                                                                           | 0/97 [00:00<?, ?it/s]
[tqdm progress-bar output trimmed — 62 of the 97 images yielded a detection: (7, 7) ×51, (8, 4) ×5, (7, 5) ×4, (7, 4) ×2]
100%|██████████████████████████████████████████████████████████████████████████████████| 97/97 [03:01<00:00,  1.87s/it]
In [7]:
# calibrating all cameras 
for k, v in camera_dict.items():
    print('camera id', k)
    # Calibrate the camera. Image size is taken from the last frame read above
    # (assumes all cameras capture at the same resolution).
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(v['object_points'], v['image_points'], gray.shape[::-1], None, None)
    
    # storing calibrated camera parameters in the dictionary against camera_id
    camera_dict[k]['camera_matrix'] = mtx
    camera_dict[k]['distortion_coefficients'] = dist
    # Print the intrinsic parameters of the camera
    print("Intrinsic Parameters (Camera Matrix):")
    print(mtx)
    print("\nDistortion Coefficients:")
    print(dist)
camera id 8
Intrinsic Parameters (Camera Matrix):
[[2.01226284e+03 0.00000000e+00 8.01532842e+02]
 [0.00000000e+00 1.95211779e+03 3.75133979e+02]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00]]

Distortion Coefficients:
[[ 1.47409711e+00 -2.57604182e+01  5.79125640e-03 -1.13321127e-01
   2.38245407e+02]]
camera id 10
Intrinsic Parameters (Camera Matrix):
[[1.82508231e+04 0.00000000e+00 6.39578331e+02]
 [0.00000000e+00 1.82759528e+04 3.58947813e+02]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00]]

Distortion Coefficients:
[[ 2.12263462e-01 -1.12365892e+03  6.60530850e-02 -8.24250361e-03
  -1.52105505e+00]]
camera id 9
Intrinsic Parameters (Camera Matrix):
[[1.31261953e+04 0.00000000e+00 6.37653704e+02]
 [0.00000000e+00 1.29957836e+04 3.59145520e+02]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00]]

Distortion Coefficients:
[[-4.70966847e+00  2.55999626e+03  3.41920036e-02  1.33462427e-02
   5.89385124e+00]]
In [ ]:
 

Part 2:¶

Multi-Camera System and 3D Trajectory Estimation¶

Objective:¶

Given images from 10 fixed cameras of a pitch with known dimensions, estimate the 3D coordinates of a moving ball, the pose of each camera, and predict the 3D trajectory of the ball for applications like LBW decision in cricket.

Assignment Details:¶

1. 3D Coordinate Estimation:¶

  • Explain how you would use the calibrated cameras to estimate the 3D coordinates of the ball on the pitch from multiple camera views.

2. Camera Pose Estimation:¶

  • Describe the method you would use to estimate the pose (position and orientation) of each camera relative to the pitch. Take the center of the pitch as (0,0,0), the pitch plane as XY, and height as Z.

3. 3D Trajectory Estimation:¶

  • Given several 3D points of the ball, including points after bounces and when the ball hits an object (e.g., a leg for LBW decisions), outline an approach to estimate the 3D trajectory of the ball.
  • Discuss how you would predict the trajectory post-impact to assist in decision-making systems like DRS in cricket.

4. Submission Requirements:¶

  • Share the tentative algorithms to do the same. For 3D estimations don't use Deep learning based techniques. Use of visual diagrams and code snippets is always welcome.

  • Bonus points for giving methods to optimize the measurements post all calculations.

Solution¶

3D Coordinate Estimation:¶

  • Since the cameras are calibrated, and assuming they are synchronised so that the frames captured by all cameras show the ball at the same time instant:

  • First, we need to detect the ball in each camera frame. A detection model can be used for this; it returns the class and the 2D image coordinates of the ball.

  • The image coordinates give the ball's location in 2D space with respect to the current frame; they cannot be used "as is" as real-world 3D coordinates.

  • To translate 2D image coordinates into real-world 3D coordinates, the multi-camera setup can be used. For that, the origin can be taken as the centre of the pitch, with the X, Y, and Z axes defined accordingly.

  • One camera gives only 2D information about the ball; if more than one camera is used, the 3D location of the ball can be computed using "triangulation".

  • To put it simply, consider the pitch as a cuboid of length x width x height, with two cameras: one facing the batsman straight on from the non-striker's end, and the other facing from the side so that the bowler is on the left and the batsman on the right of the frame. In this case triangulation works in the following way:

    • The first camera gives the ball's location in the X-Y plane, and the second camera gives its location in the Y-Z plane.
    • To get a 3D coordinate, we take the X-Y coordinates from camera 1 and the Z coordinate from camera 2.
    • This gives the 3D coordinate of the ball.
    • Adding more cameras results in a more accurate location.
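The idea above can be sketched with linear (DLT) triangulation; in practice cv2.triangulatePoints does the same job given two projection matrices. Below is a minimal NumPy sketch with made-up, normalized cameras one metre apart (a real setup would use P = K[R|t] built from Part 1's intrinsics and the estimated poses):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each view contributes two rows
    # (x * P[2] - P[0] and y * P[2] - P[1]) of a homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical setup: two normalized cameras, the second shifted 1 m along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

ball_3d = np.array([0.2, 0.1, 5.0])   # ground-truth ball position for the demo
est = triangulate(P1, P2, project(P1, ball_3d), project(P2, ball_3d))
print(np.round(est, 6))               # recovers ball_3d
```

With more than two cameras, the same system simply gains two rows per extra view, which is what makes additional cameras improve accuracy.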

3. 3D Trajectory Estimation:¶

  • If the ball is a full toss, the 3D trajectory is relatively easy to estimate: take the initial trajectory points and model them with the ball's speed and its parabolic projectile motion.
  • Since the ball usually makes an impact with the pitch, and the bowler can spin or swing the ball, estimating its trajectory is not a straightforward task.
  • For this, the ball's trajectory can be broken into three phases: the pre-impact trajectory, the impact itself, and the post-impact trajectory.
    • Pre-impact trajectory: capture the initial 3D points of the ball and apply object tracking (Kalman filter, SORT, DeepSORT, etc.) and curve-fitting algorithms to them.
    • Impact: an impact can be detected when the ball stops following the estimated trajectory, or suddenly stops at some point.
    • Post-impact trajectory: determined by considering multiple factors such as the incidence angle of the pre-impact trajectory, the change of angle, the velocity, and the 3D coordinates of the ball after impact.
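As a sketch of the pre-impact curve-fitting step: each coordinate of a (drag- and spin-free) projectile is at most quadratic in time, so a degree-2 polynomial fit per axis lets us extrapolate the ball's position. The timestamps and velocities below are made up for illustration:

```python
import numpy as np

g = 9.81                                    # gravity, m/s^2
t = np.linspace(0.0, 0.4, 9)                # observed timestamps (s), made up

# Synthetic pre-impact track: constant horizontal velocity, free fall in Z.
xyz = np.stack([30.0 * t, 0.5 * t, 2.0 - 0.5 * g * t**2], axis=1)

# Projectile motion is (at most) quadratic in time, so fit a degree-2
# polynomial per axis and extrapolate along it.
coeffs = [np.polyfit(t, xyz[:, i], 2) for i in range(3)]

def predict(tq):
    return np.array([np.polyval(c, tq) for c in coeffs])

print(np.round(predict(0.5), 3))            # ball position 0.1 s past the last sample
```

On real, noisy triangulated points the same fit would be run on Kalman-smoothed positions, and an impact flagged when new observations deviate strongly from the fitted curve.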

Camera Pose Estimation:¶

  • To estimate each camera's position and orientation relative to the pitch, I'd first ensure the cameras are calibrated, and then use known markers on the pitch.
  • I'd calculate each camera's translation and rotation to align it with the pitch's coordinates, taking the centre as (0,0,0) and the pitch plane as XY with height along Z.
  • This process aligns the cameras' views with the pitch's layout, providing accurate spatial information for subsequent analysis.
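In practice this is cv2.solvePnP with the calibrated intrinsics and four or more known pitch markers as 3D-2D correspondences. The self-contained sketch below (synthetic camera, made-up marker coordinates) instead estimates the full projection matrix by DLT and recovers the camera centre as its null vector, which is the same geometry:

```python
import numpy as np

def estimate_projection(world, image):
    # DLT: each 3D-2D correspondence gives two rows of the homogeneous system A p = 0;
    # the solution is the right singular vector with the smallest singular value.
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)

def camera_center(P):
    # The camera centre is the (homogeneous) null vector of the projection matrix.
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical ground truth: camera 15 m behind one end, 2 m up, looking along +Y
# (pitch centre at the origin, pitch plane = XY, height = Z, as in the assignment).
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
C_true = np.array([0.0, -15.0, 2.0])
P_true = K @ np.hstack([R, (-R @ C_true).reshape(3, 1)])

# Hypothetical pitch markers (metres): points on the ground plane (Z = 0)
# plus stump-top points off the plane, so the correspondences are not coplanar.
markers = np.array([
    [-1.5, -10.0, 0.00], [1.5, -10.0, 0.00],
    [-1.5,  10.0, 0.00], [1.5,  10.0, 0.00],
    [ 0.0,  10.0, 0.71], [0.1, -10.0, 0.71],
    [ 0.6,   0.0, 0.00], [-0.9,  4.0, 0.00],
])
pixels = [project(P_true, m) for m in markers]

P_est = estimate_projection(markers, pixels)
print(np.round(camera_center(P_est), 4))   # recovers C_true = (0, -15, 2)
```

With solvePnP the intrinsics K are already known, so only the markers' world coordinates and pixel positions are needed, and coplanar markers (crease lines alone) are sufficient.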
In [ ]:
 

Part 3:¶

Small Object Detection¶

Objective: Detect a small object (the ball) in the frames captured by the camera system. Assignment Details:

1. Detection Algorithm/Architecture:

  • Recommend an algorithm or neural network architecture that is well-suited for

detecting small objects like a cricket ball in the provided images. Justify your choice.

2. Submission Requirements:

  • Give network architecture details that would enable us to do this task and why.

If you want to modify existing models it is encouraged, and provide rationale if you do it from scratch.

  • Bonus points for providing optimization and hardware acceleration details.
In [ ]:
 

Detecting the ball in a camera frame: Algorithm/Architecture¶

Object detection is a fairly straightforward process with modern model architectures. Detecting small objects, however, is harder, because a small object can lose its features when the image is resized for model inference. Small object detection is a difficult task, but it is still possible to build a system that detects the ball with high accuracy.

Solution¶

1. Model architecture¶

  • The model architecture can be chosen based on the following parameters:
    • Accuracy-inference time tradeoff: if accuracy matters most, heavier models with more parameters can be used; if inference time must be low (for real-time, high-FPS inference), lighter models are preferable.
    • Region-proposal-based detection networks are slightly more accurate, but their higher inference time makes them a little less favourable for small object detection.
    • Single-pass object detection algorithms like YOLO (preferred) and SSD can be used for real-time inference. YOLO models are robust, accurate and, most importantly, scale-invariant, making them a good choice for small object detection.
    • The input image size is also an important hyperparameter, since it determines how much detail of the object survives. It is standard practice to resize the input image to a lower resolution to reduce computation and make real-time inference practical; the input size should be finalized with the accuracy-inference time tradeoff in mind.
    • Apart from this, Neural Architecture Search can also be used to find an optimal model architecture (computationally expensive).

2. Dataset¶

  • The dataset can be prepared with two classes in mind: ball and background. For the ball class, any image containing the ball can be used, e.g. the ball being bowled, on the grass, in the sky, over the stands, occluded (in the bowler's hands), or against a background of the same colour (a white ball in a bright sky, in front of a white jersey, or on a white boundary line).
  • The second (background) class should be as diverse as possible to avoid false positives; these are images of the stadium without the ball in them.

3. Network modifications¶

  • The most important modification is changing the model's number of classes from 80 (COCO dataset) to 1 (ball; background is not treated as a class, since any image without a label is background).
  • Transfer learning is recommended, as starting from pretrained weights generally yields better results when training a new network.

4. Optimization and Hardware Acceleration:¶

  • Software-based

    • Quantization: model parameters can be converted to FP16 or INT8 to reduce the compute required for model operations (INT8 requires a calibration dataset drawn from the same distribution as the training data).
    • Pruning: parameters that make little to no contribution to correct predictions can be removed from the model.
    • Model compression: knowledge distillation can be used to train smaller (student) models with the help of larger (teacher) models; layer fusion can also be used to compress the model.
  • Hardware-based

    • NVIDIA GPUs: TensorRT can be used to optimise the model for inference on specific NVIDIA devices.
    • Intel CPUs: OpenVINO can be used to optimise the model for inference on Intel chip families.
    • Compiler accelerators: Apache TVM can be used to optimise the model for faster inference (e.g. on ARM devices).
  • A combination of software- and hardware-based optimisations can be applied to the final model.
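As a toy, framework-agnostic illustration of the INT8 idea (real toolchains like TensorRT or OpenVINO do this per-layer with calibration data): symmetric per-tensor quantization maps the largest weight magnitude to 127, cutting storage 4x versus FP32 with a round-trip error bounded by half a quantization step.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map the largest |weight| to 127.
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)   # stand-in for a weight tensor

q, s = quantize_int8(w)
print(q.nbytes, w.nbytes)                          # 4096 16384 -> 4x smaller
print(float(np.abs(dequantize(q, s) - w).max()) < s)  # True: error below one quantization step
```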

In [ ]:
 

Part 4:¶

Camera Specifications¶

Objective:¶

Determine the ideal FPS (Frames Per Second) and shutter speed for capturing the ball's motion clearly, and specify the precision required in camera synchronization for effective multi-camera analysis.

Assignment Details:¶

1. FPS and Shutter Speed:¶

  • Suggest the ideal FPS and shutter speed for the cameras used in this setup to do 3D detection properly. Provide reasons for your choices, considering the speed of the ball and the need for clear images without motion blur, and to prevent the issues shown in the image below:

2. Camera Synchronization Precision:¶

  • Discuss the level of precision required in synchronizing the 10 cameras to ensure accurate 3D trajectory estimation and other analyses. Explain why this precision is necessary. And how to estimate those values.

3. Submission Requirements:¶

  • Discuss Camera settings to be used or camera specs required. Also discuss in brief why you came to this conclusion.

Solution¶

FPS calculations¶

The fastest ball bowled so far is 161.3 km/h, by Shoaib Akhtar, i.e. 44.8 m/s. This means the ball travels ~45 metres in a second. If we captured that ball at 1 fps, we would see it as a 45-metre-long trail due to motion blur. For the sharpest picture the ball would have to be stationary, which will not be the case here, since the ball is in play.

  • Ideally we would need infinite fps to capture the ball perfectly still and remove motion blur completely; however, infinite fps is not achievable.
  • So the best approach is to define an acceptable ball-movement distance per frame; let's take 5 cm as an arbitrary acceptable value.
    • To calculate the required fps, divide the distance covered per second by the acceptable distance per frame: 45 / 0.05 = 900 fps.
    • Since the ball travels 5 cm per frame, there will be nominal motion blur, but little to no significant loss of detail.
    • If more accuracy is needed, the acceptable distance travelled per frame can be reduced below 5 cm, which raises the required fps.
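The arithmetic above as a sketch (using the rounded 45 m/s figure from the text):

```python
ball_speed_ms = 45.0        # ~161.3 km/h, rounded as in the text above
acceptable_blur_m = 0.05    # 5 cm of ball travel per frame

fps_required = ball_speed_ms / acceptable_blur_m
exposure_s = 1.0 / fps_required   # longest acceptable exposure per frame

print(round(fps_required))  # 900
print(exposure_s)           # ~0.00111 s, i.e. 1/900 s
```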

Required FPS: minimum 900 (considering 5 cm as acceptable ball movement per frame)¶

Shutter speed¶

  • Considering 900 fps and 5 cm of acceptable ball movement per frame, the exposure time should be at most 1/900 s, i.e. a shutter speed of 1/900 or faster.

Camera synchronisation¶

  • Considering a shutter speed of 1/900, i.e. each image is captured within 1/900 s, the cameras should be synchronised to within 1/900 s of a common time origin, to ensure the same event instant is captured across all cameras.
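The synchronisation tolerance follows the same budget: a sync offset of dt seconds shifts the ball by speed × dt metres between views, so keeping that shift under the 5 cm budget gives:

```python
ball_speed_ms = 45.0          # same rounded figure as the FPS section
position_budget_m = 0.05      # tolerated inter-camera position mismatch (5 cm)

# A sync offset dt moves the ball by ball_speed_ms * dt between views, so:
max_sync_error_s = position_budget_m / ball_speed_ms
print(max_sync_error_s)       # ~0.00111 s, i.e. about 1/900 s (roughly 1.1 ms)
```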